916 research outputs found

    Echo chamber effects based on a novel three-dimensional Deffuant-Weisbuch model

    Full text link
    To address the opinion polarization and distortion caused by the echo chamber effect in the evolution of online public opinion, this paper proposes a three-dimensional Deffuant-Weisbuch model to study how the echo chamber effect forms and how it can be eliminated. First, the original pairwise interaction model is generalized to a three-point interaction model. Second, individual psychological mechanisms are considered by introducing an individual emotional factor into the trust threshold of the original model. Finally, a natural opinion evolution coefficient is introduced to modify the model. The improved model is used to run simulation experiments on social networks with different structures, and opinion leaders and active agents are introduced into the network in order to study the mechanisms that generate and break the echo chamber. The experimental results show that changing the network structure cannot eliminate the echo chamber effect; increasing network stability and connectivity can only slow it down. Opinion leaders aggregate opinions within their scope of influence and have a guiding effect: if opinion leaders change their opinions over time, they can steer opinions toward convergence on neutral views and thereby break the echo chamber. Active agents can likewise lead the opinions in the network to converge to the neutral position, and active agents with high stubbornness can lead free opinions to converge to the neutral position, thus breaking the echo chamber effect. Comment: 34 pages, 57 figures.
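
    As a point of reference, below is a minimal sketch of the classical pairwise Deffuant-Weisbuch update that the paper generalizes; the convergence parameter mu, the trust threshold epsilon, and the toy run are illustrative assumptions and do not reflect the paper's three-point, emotion-weighted extension.

    import random

    def dw_step(opinions, mu=0.5, epsilon=0.3):
        """One pairwise Deffuant-Weisbuch update: two randomly chosen agents
        move toward each other only if their opinions differ by less than
        the trust threshold epsilon (bounded confidence)."""
        i, j = random.sample(range(len(opinions)), 2)
        diff = opinions[j] - opinions[i]
        if abs(diff) < epsilon:
            opinions[i] += mu * diff   # mu is the convergence parameter
            opinions[j] -= mu * diff

    # toy run: 100 agents with opinions drawn uniformly from [0, 1]
    opinions = [random.random() for _ in range(100)]
    for _ in range(10_000):
        dw_step(opinions)

    With a small epsilon, this baseline dynamic already fragments opinions into separated clusters, which is the echo-chamber-like behaviour the paper's extensions set out to break.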

    Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time

    Get PDF
    The notion of program sensitivity (aka Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program-logic framework for proving expected sensitivity of probabilistic while loops where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and dependent on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, it is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples, including Gambler's Ruin, stochastic hybrid systems, and stochastic gradient descent. We also present experimental results showing that our automated approach can handle various probabilistic programs from the literature.
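
    To illustrate the class of programs considered, that is, loops whose termination time is randomized and depends on the initial input, here is a minimal Gambler's Ruin sketch; the unit bet size and the fairness parameter p are illustrative assumptions.

    import random

    def gamblers_ruin(x, n, p=0.5):
        """Probabilistic while loop whose number of iterations is randomized
        and depends on the initial capital x: bet one unit per round until
        ruin (0) or the target n is reached."""
        while 0 < x < n:
            x += 1 if random.random() < p else -1
        return x

    Expected sensitivity for such a program means that a small change in the initial capital x produces a proportionally small change in the expected output, even though the number of loop iterations itself is random and input-dependent.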

    Some new decay estimates for (2+1)-dimensional degenerate oscillatory integral operators

    Full text link
    In this paper, we consider the (2+1)-dimensional oscillatory integral operators with cubic homogeneous polynomial phases, which are degenerate in the sense of \cite{Tan06}. We improve the previously known L^2 \to L^2 decay rate to 3/8 and also establish a sharp L^2 \to L^6 decay estimate based on the fractional integration method. Comment: This new version adds some missing arguments and corrects some typos.
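
    For orientation, the operators and the improved L^2 \to L^2 bound can be written as follows; the convention that T_\lambda maps functions on \mathbb{R} to functions on \mathbb{R}^2 (hence "(2+1)-dimensional") is assumed here for illustration.

    \[
      T_\lambda f(x) \;=\; \int_{\mathbb{R}} e^{i\lambda S(x,z)}\, f(z)\, dz,
      \qquad x \in \mathbb{R}^2,\ \lambda \ge 1,
    \]
    \[
      \|T_\lambda\|_{L^2(\mathbb{R}) \to L^2(\mathbb{R}^2)} \;\lesssim\; \lambda^{-3/8},
    \]

    where S is a cubic homogeneous polynomial phase; the sharp L^2 \to L^6 estimate concerns the same family of operators.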

    The Application of Two-level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classification

    Full text link
    Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences; variances in pose, scale, or rotation make the problem even more difficult. Most fine-grained classification systems follow the pipeline of finding the foreground object or object parts (where) in order to extract discriminative features (what). In this paper, we propose to apply visual attention to the fine-grained classification task using a deep neural network. Our pipeline integrates three types of attention: bottom-up attention that proposes candidate patches, object-level top-down attention that selects patches relevant to a certain object, and part-level top-down attention that localizes discriminative parts. We combine these attentions to train domain-specific deep nets, then use them to improve both the what and the where aspects. Importantly, we avoid using expensive annotations such as bounding boxes or part information anywhere in the pipeline. This weak-supervision constraint makes our work easier to generalize. We have verified the effectiveness of the method on subsets of the ILSVRC2012 dataset and the CUB200_2011 dataset. Our pipeline delivered significant improvements and achieved the best accuracy under the weakest supervision condition, and its performance is competitive with methods that rely on additional annotations.
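
    The abstract does not spell out how the attention streams are combined; as a purely illustrative sketch, a simple late fusion of object-level and part-level classifier outputs might look like the following (the mixing weight and the softmax averaging are assumptions, not the paper's method).

    import numpy as np

    def fuse_predictions(obj_logits, part_logits, w_obj=0.5):
        """Hypothetical late fusion: softmax each attention stream's class
        scores, then average the probabilities with a mixing weight."""
        def softmax(z):
            e = np.exp(z - z.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)
        return w_obj * softmax(obj_logits) + (1 - w_obj) * softmax(part_logits)

    # toy example: three candidate classes, the two streams disagree slightly
    obj = np.array([2.0, 1.0, 0.1])
    part = np.array([1.5, 1.8, 0.2])
    print(fuse_predictions(obj, part).argmax())   # fused class decision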

    Federated Neural Architecture Search

    Full text link
    To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes the already difficult task of neural architecture design even harder, and the difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we bring automatic neural architecture search into decentralized training as a new DNN training paradigm called Federated Neural Architecture Search (federated NAS). To deal with the primary challenge of limited on-client computational and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS exploits the key opportunity that model candidates need not be fully re-trained during the architecture search, and incorporates three key optimizations: training candidates in parallel on partial clients, dropping candidates with inferior performance early, and using dynamic round numbers. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to a state-of-the-art NAS algorithm that trains models with centralized data, and reduces client cost by up to two orders of magnitude compared to a straightforward design of federated NAS.
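
    A minimal sketch of one search round with the two optimizations named in the abstract, partial-client training and early dropping, is shown below; StubClient, the subset size, and the drop fraction are illustrative assumptions, not the FedNAS implementation.

    import random

    class StubClient:
        """Hypothetical client stub: stands in for local training and
        evaluation of a candidate architecture on one device's private data."""
        def train_and_eval(self, arch):
            return random.random()   # placeholder accuracy

    def federated_nas_round(candidates, clients, k_clients=4, drop_frac=0.5):
        """One illustrative federated NAS round: each surviving candidate is
        trained on a random subset of clients (partial-client training), then
        the worst-performing fraction of candidates is dropped early."""
        scored = []
        for arch in candidates:
            subset = random.sample(clients, k_clients)        # partial clients
            accs = [c.train_and_eval(arch) for c in subset]
            scored.append((sum(accs) / len(accs), arch))
        scored.sort(key=lambda t: t[0], reverse=True)
        keep = max(1, int(len(scored) * (1 - drop_frac)))     # early dropping
        return [arch for _, arch in scored[:keep]]

    survivors = federated_nas_round(["archA", "archB", "archC", "archD"],
                                    [StubClient() for _ in range(10)])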

    Local Reasoning about Probabilistic Behaviour for Classical-Quantum Programs

    Full text link
    Verifying the functional correctness of programs with both classical and quantum constructs is a challenging task. The probabilistic behaviour entailed by quantum measurements, together with unbounded while loops, greatly complicates verification. We propose a new quantum Hoare logic for local reasoning about probabilistic behaviour by introducing distribution formulas to specify probabilistic properties. We show that the proof rules of the logic are sound with respect to a denotational semantics. To demonstrate the effectiveness of the logic, we formally verify the correctness of non-trivial quantum algorithms, including the HHL and Shor's algorithms. Comment: 27 pages. arXiv admin note: text overlap with arXiv:2107.0080
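
    For intuition on why distribution formulas are needed (the \oplus notation below is assumed for illustration and need not match the paper's syntax): measuring the |+\rangle state in the computational basis yields either classical outcome with probability 1/2, so a postcondition must describe a distribution over classical states rather than a single state.

    \[
      |{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr),
      \qquad
      x := \mathrm{meas}(q)
      \ \Longrightarrow\
      \tfrac12 \cdot (x = 0) \,\oplus\, \tfrac12 \cdot (x = 1).
    \]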

    Probabilistic spectrum Gaussian noise estimate for random bandwidth traffic

    Get PDF
    A probabilistic spectrum Gaussian noise (PSGN) model is proposed to predict the nonlinear noise for random bandwidth traffic in long-haul elastic optical networks. The model reduces the noise estimate by 9.1% on average compared to the standard Gaussian noise model applied at the maximum bandwidth.
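
    A minimal sketch of the idea as read from the abstract: weight a GN-style noise contribution by the probability of each occupied bandwidth instead of always evaluating it at the maximum bandwidth. The linear noise scaling below is a placeholder assumption, not the actual GN-model formula.

    def expected_gn_noise(bandwidth_probs, noise_of_bandwidth):
        """Probability-weighted noise estimate: sum the noise contribution of
        each possible occupied bandwidth weighted by its probability."""
        return sum(p * noise_of_bandwidth(bw) for bw, p in bandwidth_probs.items())

    # toy numbers: traffic occupies 50 GHz 70% of the time, 100 GHz 30%
    traffic = {50.0: 0.7, 100.0: 0.3}

    def noise(bw):
        # placeholder scaling: assume noise grows linearly with occupied bandwidth
        return 1e-3 * bw

    worst_case = noise(max(traffic))                 # standard estimate at max bandwidth
    psgn_like = expected_gn_noise(traffic, noise)    # probability-weighted estimate
    print(f"reduction vs. worst case: {100 * (1 - psgn_like / worst_case):.1f}%")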